Explainable AI approach
Toward explainable AI approaches for breast imaging: adapting foundation models to diverse populations
Cavalcante, Guilherme J., Moreira, José Gabriel A., Nascimento, Gabriel A. B. do, Dong, Vincent, Nguyen, Alex, Rêgo, Thaís G. do, Malheiros, Yuri, Filho, Telmo M. Silva, Torrez, Carla R. Zeballos, Gee, James C., McCarthy, Anne Marie, Maidment, Andrew D. A., Barufaldi, Bruno
Foundation models hold promise for specialized medical imaging tasks, though their effectiveness in breast imaging remains underexplored. This study leverages BiomedCLIP as a foundation model to address challenges in model generalization. BiomedCLIP was adapted for automated BI-RADS breast density classification using multi-modality mammographic data (synthesized 2D images, digital mammography, and digital breast tomosynthesis). Using 96,995 images, we compared single-modality (s2D only) and multi-modality training approaches, addressing class imbalance through weighted contrastive learning. Both approaches achieved similar accuracy (multi-modality: 0.74, single-modality: 0.73), with the multi-modality model offering broader applicability across imaging modalities and higher AUC values, consistently above 0.84, across BI-RADS categories. External validation on the RSNA and EMBED datasets showed strong generalization (AUC range: 0.80-0.93). GradCAM visualizations confirmed consistent and clinically relevant attention patterns, highlighting the models' interpretability and robustness. This research underscores the potential of foundation models for breast imaging applications, paving the way for future extensions to diagnostic tasks.
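The weighted contrastive learning mentioned for countering class imbalance could plausibly look like the sketch below: a CLIP-style image-text contrastive loss in which each pair's contribution is scaled by an inverse-frequency weight for its BI-RADS class. The embedding dimensions and class counts are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code): class-weighted image-text contrastive loss,
# assuming precomputed, L2-normalised embeddings from a BiomedCLIP-style dual encoder.
import torch
import torch.nn.functional as F

def weighted_clip_loss(image_emb, text_emb, class_ids, class_weights, temperature=0.07):
    """image_emb, text_emb: (N, D) embeddings of paired images and BI-RADS text prompts;
    class_ids: (N,) density labels; class_weights: (C,) hypothetical inverse-frequency
    weights used to counter class imbalance."""
    logits = image_emb @ text_emb.t() / temperature           # (N, N) similarity matrix
    targets = torch.arange(len(image_emb), device=logits.device)
    w = class_weights[class_ids]                               # per-sample weight
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")
    return ((loss_i2t + loss_t2i) * 0.5 * w).sum() / w.sum()

# Example: four BI-RADS density classes weighted by inverse frequency (hypothetical counts).
counts = torch.tensor([1200., 4300., 3900., 800.])
class_weights = counts.sum() / (len(counts) * counts)
img = F.normalize(torch.randn(8, 512), dim=-1)
txt = F.normalize(torch.randn(8, 512), dim=-1)
labels = torch.randint(0, 4, (8,))
print(weighted_clip_loss(img, txt, labels, class_weights))
```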
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- South America > Brazil (0.04)
- Europe > United Kingdom > England > Bristol (0.04)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Oncology > Breast Cancer (0.37)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.40)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.40)
#AAAI2024 workshops round-up 4: eXplainable AI approaches for deep reinforcement learning, and responsible language models
Deep reinforcement learning (DRL) has recently made remarkable progress in several application domains, such as games, finance, autonomous driving, and recommendation systems. However, the black-box nature of deep neural networks and the complex interaction among various factors raise challenges in understanding and interpreting the models' decision-making processes. This workshop brought together researchers, practitioners, and experts from both the DRL and the explainable AI communities to focus on methods, techniques, and frameworks that enhance the explainability and interpretability of DRL algorithms. The responsible language models (ReLM) workshop focused on the development, implementation, and applications of LMs aligned with responsible AI principles. Both theoretical and practical challenges regarding the design and deployment of responsible LMs were discussed, including bias identification and quantification, bias mitigation, transparency, privacy and security issues, hallucination, uncertainty quantification, and various other risks associated with LMs.
- North America > United States > Michigan (0.05)
- North America > United States > Indiana (0.05)
- North America > Canada > Quebec > Montreal (0.05)
Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models
Hammersborg, Patrik, Strümke, Inga
Neural network models are widely used in a variety of domains, often as black-box solutions, since they are not directly interpretable for humans. The field of explainable artificial intelligence aims at developing explanation methods to address this challenge, and several approaches have been developed over recent years, including methods for investigating what type of knowledge these models internalise during the training process. Among these, the method of concept detection investigates which concepts neural network models learn to represent in order to complete their tasks. In this work, we present an extension to the method of concept detection, named concept backpropagation, which provides a way of analysing how the information representing a given concept is internalised in a given neural network model. In this approach, the model input is perturbed in a manner guided by a trained concept probe for the described model, such that the concept of interest is maximised. This allows for the visualisation of the detected concept directly in the input space of the model, which in turn makes it possible to see what information the model depends on for representing the described concept. We present results for this method applied to a variety of input modalities, and discuss how our proposed method can be used to visualise what information trained concept probes use, and the degree to which the representation of the probed concept is entangled within the neural network model itself.
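A minimal reading of the described procedure, assuming a frozen feature extractor and a linear concept probe, is gradient ascent on a copy of the input that maximises the probe's activation while staying close to the original; the difference image then visualises the concept in input space. All module names and hyperparameters below are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's implementation): perturb an input by
# gradient ascent so a trained linear concept probe's activation is maximised.
import torch
import torch.nn as nn

def concept_backprop(model_features, probe, x, steps=100, lr=0.05, reg=0.1):
    """model_features: frozen module mapping input -> intermediate features;
    probe: linear layer trained to detect the concept on those features;
    x: original input. Returns a perturbed input that maximises the probe."""
    x_pert = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_pert], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = model_features(x_pert)
        concept_score = probe(feats.flatten(1)).mean()
        # Maximise the concept score while keeping the input close to the original.
        loss = -concept_score + reg * (x_pert - x).pow(2).mean()
        loss.backward()
        opt.step()
    return x_pert.detach()

# Toy usage with a hypothetical small CNN feature extractor and probe.
features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(4))
probe = nn.Linear(8 * 4 * 4, 1)
x = torch.rand(1, 1, 28, 28)
x_max = concept_backprop(features, probe, x)
saliency = (x_max - x).abs()   # where the input changed to express the concept
```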
- North America > United States > California (0.04)
- Europe > Norway > Central Norway > Trøndelag > Trondheim (0.04)
Explainable artificial intelligence for human-machine interaction in brain tumor localization
Primary malignancies of the adult brain are fatal worldwide. Computer vision, and especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in various image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the interpretable rationale that is an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or to integrate explainability into the training process itself.
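One common way to "visualize the high-level features of trained models", as the abstract puts it, is a Grad-CAM-style class activation map. The sketch below assumes a generic torchvision ResNet-18 as a stand-in classifier rather than the article's actual model.

```python
# Minimal Grad-CAM-style sketch (one common feature-visualisation technique; an
# assumption for illustration, not tied to the article's specific model or data).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()     # placeholder classifier, e.g. tumor vs. healthy
activations, gradients = {}, {}

def fwd_hook(_, __, out): activations["feat"] = out
def bwd_hook(_, grad_in, grad_out): gradients["feat"] = grad_out[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)                                # stand-in for a brain MRI slice
score = model(x)[0].max()                                     # score of the predicted class
score.backward()

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)    # global-average-pool the grads
cam = F.relu((weights * activations["feat"]).sum(dim=1))      # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalise heatmap to [0, 1]
```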
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.82)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.44)
Explainable artificial intelligence for mechanics: physics-informing neural networks for constitutive models
Koeppe, Arnd, Bamer, Franz, Selzer, Michael, Nestler, Britta, Markert, Bernd
(Artificial) neural networks have become increasingly popular in mechanics as a means to accelerate computations with model order reduction techniques and as universal models for a wide variety of materials. However, the major disadvantage of neural networks remains: their numerous parameters are challenging to interpret and explain. Thus, neural networks are often labeled as black boxes, and their results often elude human interpretation. In mechanics, the new and active field of physics-informed neural networks attempts to mitigate this disadvantage by designing deep neural networks on the basis of mechanical knowledge. By using this a priori knowledge, deeper and more complex neural networks became feasible, since the mechanical assumptions could be explained. However, the internal reasoning and explanation of neural network parameters remain mysterious. Complementary to the physics-informed approach, we propose a first step towards a physics-informing approach, which explains neural networks trained on mechanical data a posteriori. This novel explainable artificial intelligence approach aims at elucidating the black box of neural networks and their high-dimensional representations. Therein, principal component analysis decorrelates the distributed representations in the cell states of recurrent neural networks (RNNs) and allows comparison with known and fundamental functions. The novel approach is supported by a systematic hyperparameter search strategy that identifies the best neural network architectures and training parameters. The findings of three case studies on fundamental constitutive models (hyperelasticity, elastoplasticity, and viscoelasticity) imply that the proposed strategy can help identify numerical and analytical closed-form solutions to characterize new materials.
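A hedged sketch of the analysis step described here, PCA applied to RNN cell states gathered over a loading history, might look as follows; the LSTM size, the sinusoidal strain history, and the untrained network are illustrative assumptions only.

```python
# Minimal sketch (a hedged reading of the described analysis, not the authors' code):
# collect LSTM cell states over a loading sequence and decorrelate them with PCA
# so the leading components can be compared against known constitutive responses.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)

# Hypothetical strain history (one loading/unloading cycle) fed through the network.
strain = torch.sin(torch.linspace(0, 2 * np.pi, 200)).reshape(1, -1, 1)
cell_states = []
h = c = torch.zeros(1, 1, 32)
for t in range(strain.shape[1]):
    _, (h, c) = lstm(strain[:, t:t + 1, :], (h, c))
    cell_states.append(c.squeeze().detach().numpy())
C = np.stack(cell_states)                      # (time, hidden) cell-state trajectories

# PCA via SVD on the centred cell states.
C_centred = C - C.mean(axis=0)
U, S, Vt = np.linalg.svd(C_centred, full_matrices=False)
scores = C_centred @ Vt.T                      # decorrelated components over time
explained = S**2 / (S**2).sum()
print("variance explained by first 3 components:", explained[:3])
# scores[:, 0] can then be plotted against the strain history and compared with
# closed-form responses (e.g. a hyperelastic stress-strain curve).
```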
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.05)
- North America > United States > New York (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Aachen (0.04)
- Transportation (0.54)
- Health & Medicine (0.46)